Explaining and Repairing Plans that Fail

Author

  • Kristian J. Hammond
Abstract

A persistent problem in machine planning is that of repairing plans that fail. One approach to this problem that has been shown to be quite powerful is based on the idea that detailed descriptions of the causes of a failure can be used to decide between the different repairs that can be applied. This paper presents an approach to repair in which plan failures are described in terms of causal explanations of why they occurred. These domain-level explanations are used to access abstract repair strategies, which are then used to make specific changes to the faulty plans. The approach is demonstrated using examples from CHEF, a case-based planner that creates and debugs plans in the domain of Szechwan cooking. While the approach discussed here is examined in terms of actual plan failures, this technique can also be used in the repair of plans that are discovered to be faulty prior to their actual running.

1. The Problem of Plan Failure

All planners face the problem of plans that fail. As a result, most planners have some mechanism for plan repair or debugging. These range from simple replanning and backtracking [5] to analytically driven critics [16] and meta-planning techniques [27]. The most powerful approaches to repair, however, have been those that make use of the notion that a planner should first explain and then debug any failures that are detected (e.g., [21, 29]). This work builds on a tradition, initially established by Sussman [24], of approaching plan repair from the point of view of developing vocabularies for constructing descriptions of planning problems that can be used to select specific and thus powerful repairs. This paper presents an approach to plan repair that uses a causal description of why a failure has occurred to index to a set of strategies for repairing it. The approach makes use of a set of repair rules that are indexed by the causal descriptions of the problems that they solve.
The repair rules themselves are empty frames that are instantiated using the specific steps and states of the problem at hand. This combination of abstract repair knowledge and specific state information results in a repair mechanism that is both general (in that it can be applied to a wide range of problems) and powerful (in that the instantiated repairs are aimed directly at the specific details of any given problem). The approach is embedded in the computer program CHEF, a case-based planner that creates new plans in the domain of Szechwan cooking. The difference between the approach taken in CHEF and that taken by other planners stems from CHEF's use of causal explanations of its own failures to focus on and select relevant repairs.

2. Repair in CHEF

CHEF's approach to plan repair is based on its ability to explain its own failures. This process of failure repair can be broken down into five basic steps:
(1) Notice the failure.
(2) Build a causal explanation of why it happened.
(3) Use the explanation to find a collection of repair strategies.
(4) Instantiate each of the repair strategies, using the specific steps and states in the problem.
(5) Choose and implement the best of the instantiated repairs.
The reasons behind most of these steps are straightforward. CHEF has to notice a failure before it can react to it at all. It has to try each of its strategies in order to choose the best one. It has to implement one of them to fix the plan. The only steps that might not seem as straightforward are the second and third steps of building an explanation and using it to find a structure in memory that corresponds to the problem and organizes solutions to it.
When CHEF encounters a failure in one of its own plans, it uses a set of causal rules to explain why the failure has occurred. This explanation includes a description of the steps and states that led to the failure as well as a description of the goals that were being planned for by those steps and states. This description is used to discriminate down to the set of abstract repair strategies appropriate to the general description of the problem. CHEF then tries to implement each of the different strategies and chooses the one best suited for the specific problem.
The problem that this technique addresses is one that crops up again and again in knowledge-intensive systems: the problem of the control of knowledge access (McDermott [14], Stefik [23] and Wilensky [27]). In this case, the problem takes the form of choosing which plan repair, among many possible repairs, should be applied to a faulty plan. The goal is to have a planner that is able to diagnose its own failures and then use this diagnosis to choose and apply the repair strategy that will result in a corrected plan. The approach described in this paper allows a planner to do this diagnosis and repair without having to do an exhaustive or even extensive search of the possible results of applying the different plan changes that the planner has available. Further, the specific repair is selected on the basis of its ability to correct an existing problem while avoiding the introduction of any new ones.

∗ This paper describes work done at the Yale Artificial Intelligence Lab under the direction of Roger Schank. This work was supported by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contracts N00014-82-K-0149, N00014-85-K-0108 and N00014-75-C-1111, NSF grant IST-8120451, and Air Force contract F49620-82-K-0010. It was also supported by the Defense Advanced Research Projects Agency, monitored by the Air Force Office of Scientific Research under contract F49620-88-C-0058, and the Office of Naval Research under contract N00014-85-K-0108.
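The five steps above can be sketched as a small loop. This is not CHEF's actual code; every name, the causal description, and the single index entry below are illustrative stand-ins for CHEF's internal structures.

```python
# Sketch of CHEF's five-step repair loop (hypothetical names throughout).

def explain(failure):
    """Step 2: build a causal description of why the failure occurred.
    A real explanation traces causal rules; here it is a canned example:
    a side effect of one step blocks the precondition of a later one."""
    return {"cause": "side-effect", "blocks": "precondition"}

# Step 3: abstract repair strategies indexed by causal descriptions.
STRATEGY_INDEX = {
    ("side-effect", "precondition"):
        ["ALTER-PLAN:SIDE-EFFECT", "REORDER", "RECOVER"],
}

def repair(plan, failure):
    explanation = explain(failure)                            # step 2
    key = (explanation["cause"], explanation["blocks"])
    strategies = STRATEGY_INDEX[key]                          # step 3
    candidates = [s + " applied to " + plan for s in strategies]  # step 4
    return candidates[0]                                      # step 5 (trivial "best" choice)

print(repair("paint-ladder-then-ceiling", "ladder is wet"))
```

The point of the sketch is only the flow of control: the explanation, not the raw failure, is what selects the repairs.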
3. An Overview of CHEF

Before getting into the repair method used in CHEF, it is important to understand CHEF as a planner. CHEF is a case-based planner that, like other case-based reasoning systems, such as those of Carbonell [3] and Kolodner et al. [10, 11], builds new plans out of its memory of old ones. Its domain is Szechwan cooking and its task is to build new recipes on the basis of a user's requests. CHEF's input is a set of goals for different tastes, textures, ingredients and types of dishes, and its output is a single recipe that satisfies all of those goals. Its basic approach is to find a past plan in memory that satisfies as many of the most important goals as possible and then modify that plan to satisfy the remaining goals as well.
Before searching for a plan to modify, CHEF examines the goals in its input and predicts any failures that might arise out of the interactions between the plans for satisfying them. This prediction is made on the basis of past problems that the planner has encountered rather than an exhaustive base of rules. When a failure is predicted, CHEF adds a goal to avoid the failure to its list of goals to satisfy, and this new goal is also used to search for a plan.
In one CHEF example, the program predicts that stir frying chicken with snow peas will lead to soggy snow peas because the chicken will sweat liquid into the pan. In response to this prediction, the program searches its memory for a stir fry plan that avoids the problem of vegetables getting soggy when cooked with meats. In doing so, it finds a past plan for beef and broccoli that solves this problem by stir frying the vegetable and meat separately. The important similarity between the current situation and the one for which the past plan was built is that the same problem arises out of the interaction between the planner's goals, although the goals themselves are different.
This similarity is what allows the planner to access the plan from the past, given the description of the goal interaction in the present. The power of CHEF lies in its ability to anticipate and thus avoid failures it has encountered before. The topic of this paper, however, is its ability to repair these planning failures when it first encounters them.
CHEF consists of six modules:
(1) An ANTICIPATOR predicts planning problems on the basis of the failures that have been caused by the interaction of goals similar to those in the current input.
(2) A RETRIEVER searches CHEF's plan memory for a plan that satisfies as many of the current goals as possible while avoiding the problems that the ANTICIPATOR has predicted.
(3) A MODIFIER alters the plan found by the RETRIEVER to achieve any goals from the input that it does not satisfy.
(4) A REPAIRER is called if a plan fails. It builds up a causal explanation of the failure, repairs the plan and then hands it to the STORER for indexing.
(5) An ASSIGNER uses the causal explanation built by the REPAIRER to determine the features which will predict this failure in the future.
(6) A STORER places new plans in memory, indexed by the goals that they satisfy and the problems that they avoid.
(For a more detailed discussion of CHEF's planning mechanisms, see [8].)
The important module for plan repair is the REPAIRER. This module is handed a plan only after it has been run in CHEF's version of the real world, a "cook's world" simulation in which the effects of actions are computed. The rules used in this simulation are able to describe the results of the plan CHEF has created in sufficient detail for it to notice the difference between successful and unsuccessful plans. The final states of a simulation are placed on a table of results that CHEF compares against the goals that it believes a plan should satisfy. Plans that fail in one way or another to satisfy all of their goals are handed to the REPAIRER for repair.
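The flow through the six modules can be caricatured as a simple pipeline. The module bodies below are stubs, and every name and return value is invented for illustration; the point is only the order in which the modules hand results to one another.

```python
# Hypothetical sketch of CHEF's module pipeline; all behaviors are stubbed.

def anticipator(goals):      return {"predicted": "soggy-vegetable"}  # predict problems
def retriever(goals, pred):  return "beef-and-broccoli"               # plan avoiding them
def modifier(plan, goals):   return plan + "+modified"                # adapt to unmet goals
def repairer(plan):          return plan + "+repaired"                # fix post-simulation failures
def storer(plan):            return {"stored": plan}                  # index the plan in memory

def chef(goals, plan_failed=True):
    pred = anticipator(goals)
    plan = modifier(retriever(goals, pred), goals)
    if plan_failed:                 # the REPAIRER runs only when simulation reveals a failure
        plan = repairer(plan)
    return storer(plan)

print(chef(["chicken", "snow-peas"]))
```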
CHEF's algorithm is an example of what McDermott has referred to as dependency-directed transformation, in which a preliminary plan is drawn from a library of plan schemas and then incrementally debugged, either in response to simulation or execution. Another example of this approach is Simmons' "Generate, Test and Debug" (GTD) [21], a problem solver that combines associational rules with a deep causal model. Given a problem, the associational rules are used to generate preliminary (and potentially buggy) solutions, and then the causal model is used to debug them incrementally. The main difference between the two models lies in the attitude each takes towards its domain. One of Simmons' goals in GTD is a domain-independent model of planning, while the goal in CHEF was to develop a planner that learns (through the debugging of faulty plans) how better to act and react in its domain of interest. In some sense, the attitude taken in the development of CHEF was that there is a tradeoff between domain expertise and problem-solving generality, but that this expertise can be learned from weak methods.

4. Case-Based Reasoning

CHEF is part of a growing movement in Artificial Intelligence that involves the use of memory in planning (e.g., [1, 11]), problem solving (e.g., [15, 22]), language understanding [13] and diagnostic systems [12]. The main feature that links all of these systems together is that the reasoning they perform is driven by an episodic memory rather than a base of inference rules or plan operators. Like CHEF, PLEXUS [1] and MEDIATOR [11] have tried to address issues of planning from a dynamic memory. PLEXUS' memory was designed to provide nearly right plans for an execution-time improvisational system. MEDIATOR, on the other hand, used plans pulled from memory as starting points to guide new reasoning. Together, CHEF, PLEXUS and MEDIATOR cover much of the range of planning problems that occur in the world.
PLEXUS deals with those execution-time problems that can be solved by directly applying an existing plan selected from memory on the basis of environmental cues. MEDIATOR is aimed more at problems that arise out of classification errors, which can be solved by building new categories in memory. CHEF, on the other hand, is aimed at problems of the interaction between planning steps that are discovered during plan execution. Like CHEF's, MEDIATOR's memory is changed as a result of its experience.
The major difference between CHEF and PLEXUS lies in the use of causal knowledge. In CHEF, all repair is based on the use of a deep domain model that provides explanations of the failures that occur. This gives CHEF the ability to repair a wide range of problems, but limits it to those problems that it has the ability to explain. PLEXUS, on the other hand, does not require a causal model in that it repairs plans by selecting alternative subplans from memory on the basis of environmental features. This frees it from the need for an explicit domain model, but also limits it in terms of both correctness and range of possible repairs. In CHEF, the analog to the improvisation in PLEXUS is the use of object critics to tweak an almost right plan. It is clear that both of these approaches are needed in order to create a functioning case-based planner.
The major difference between CHEF and MEDIATOR is that of vocabulary. One of the major issues in CHEF is the use of abstract descriptions of planning problems in indexing. In CHEF, solutions, in the form of general strategies as well as specific plans, are indexed using a vocabulary of goal and plan interaction. In MEDIATOR, the vocabulary was more closely linked to the features of the domains in which it operated. Here the trade-off is again between the overhead involved with the inference needed to apply the more abstract vocabulary and the power gained through its use.
As with PLEXUS, the relationship between CHEF and MEDIATOR is a complementary one. Well-understood problems can be handled faster using the vocabulary in MEDIATOR, while newer problems require the use of a domain model. The analog to the MEDIATOR vocabulary in CHEF is the network of links between surface features and memories of existing problems that is used to anticipate those problems when those features are present.
Another line of research in case-based reasoning that has shown a great deal of promise has been in the area of legal reasoning [15]. This work has centered around the use of cases, in the legal sense of the word, to support decision making and argument. Like the CHEF research, much of the thrust in this work has been aimed at the issue of representation. In particular, it has involved the use of a strategic approach to the alteration of input features so as to find relevant cases in memory. This use of strategic modification has resulted in systems that can retrieve cases from memory that are similar to but strategically different from an initial input case. By finding those cases that differ along specific dimensions, the system can reinforce its own arguments as well as anticipate problems that an adversary may throw in its way. The most exciting aspect of this work is the use of explicit strategies in the selection of features for use in indexing. This strategic approach has been all but ignored in other case-based systems, yet it is clearly an important aspect of human reasoning [20].
Along with the generative work in planning and problem solving, the case-based approach has been successfully used in both language understanding and diagnosis. In their work on using memory in parsing, Martin and Riesbeck [13] have advanced the idea of parsing directly into a dynamic memory by passing markers up from lexical items to concepts in memory. This activation results in predictions also being passed through memory back down to particular lexical items.
The overall approach involves treating understanding primarily as a recognition task in which all aspects of understanding (i.e., parsing, inference and disambiguation) are handled through the use of a single mechanism and memory. In recent work in diagnosis, Koton [12] has proposed a case-based approach to diagnosis that again treats the task as one of recognition. Using an existing expert system as a data source, Koton's CASEY was able to construct a case base that allowed it to perform the same task using less computational power and with more graceful degradation at the edges of its knowledge.
CHEF is only part of what seems to be a very fruitful approach towards reasoning. In all of these systems, the philosophy is that the search for a solution to a new problem should begin with a solution to an old one.

5. Indexing of Repairs

5.1. TOPs in understanding and repair

CHEF's approach to plan repair is based on the idea that the strategies for repairing a problem should be stored in terms of that problem. In much the same way that it makes sense to organize plans in terms of the goals that they satisfy so that they can be found when those goals arise, it makes sense to store plan repair strategies in terms of the situations to which they apply, so that they too can be found when those problems arise.
CHEF builds an explanation in response to every failure. These causal descriptions are used to discriminate between different structures in memory that describe the different interactions that can arise between steps in a plan. These structures are planning versions of the understanding TOPs suggested by Schank [17] to store information relating to complex interactions of goals.
The idea behind these structures was simple: just as the concept underlying a primitive action such as ATRANS [19] could be used to store inferences relating to that action, structures relating to the interactions between these actions and the goals that they serve could also be used to store inferences that relate to those interactions. Schank also suggested that these structures could be used to organize the episodes that they describe as well as the inferences that they relate to. He argues that these structures are used for understanding in two ways: to supply expectations about a situation and to provide remindings of past situations that are similar to a current one and could be used in generalization. By having structures in memory that correspond to goal interactions rather than to individual goals or actions, an understander can have access to expectations that relate to those interactions and can learn more about them by generalizing across the similar instances that they describe.
In CHEF, plan repairs are stored in terms of the problems that they solve, using a set of planning TOPs that correspond to different planning problems. Each TOP organizes a set of repair strategies, which in turn suggest abstract alterations of the circumstances of the problem defined by the TOP. A TOP is defined not only by the series of steps that define the failure but also by the goals that those steps were originally designed to satisfy. Only those strategies that will fix the problem defined by a TOP, while at the same time maintaining the goals that are already satisfied by the existing steps, are found under that TOP. Any strategy that can be applied will repair the current problem without interfering with the goals that the steps involved with that problem still achieve. A planning TOP thus consists of the description of a planning problem and the set of repair strategies that can be applied to solve that problem.
Like Schank's TOPs, CHEF's planning TOPs are memory structures that correspond to the interactions between the steps and states of a plan. Each planning TOP describes a planning problem in terms of a causal vocabulary of effects and preconditions. Instead of storing inferences about the situation described, however, each planning TOP stores possible repairs to the planning problem that it describes. Once a problem is recognized, the planner can use the repairs stored under the TOP that describes it to alter the faulty plan and thus fix the failure.
In CHEF, each TOP stores the repair strategies that will fix the faulty plans it describes. These TOPs are not just descriptions of problems; they are descriptions of problems paired with the solutions that can be applied to fix them. The strategies under a TOP are those, and only those, alterations of the causal structure of the problem described by the TOP that can solve that problem. The strategies themselves are not specific repairs; they are the abstract descriptions of changes in the causality of the situation that CHEF knows about. Finding a TOP that corresponds to a problem means finding the possible repairs that can be used to fix that problem.
Each of CHEF's TOPs is stored in memory, indexed by the features of the explanation that describes the problem with which the TOP deals. To get to the strategies that will deal with a problem, CHEF has to explain why it happened and then use this explanation to find the TOP and strategies that will fix the plan. This is a simple idea: the solution to any problem depends on the underlying causality of the problem. It makes sense, then, to index solutions to problems under the causal descriptions of the problems themselves, so that these descriptions can be used to access the appropriate solutions.
For example, the planning TOP SIDE-EFFECT:BLOCKED-PRECONDITION describes situations in which the side effect of one step violates the preconditions for a later one. One case of this comes from Sussman's [24] example of a planner painting a ladder before painting a ceiling and then finding that the precondition for painting the ceiling, having an available ladder, has been violated by an earlier step in the plan that has left the ladder wet. This TOP is recognized because the planner can see that a step is actually blocked and can then build the explanation that the state that blocks it is the product of an earlier step and that the state itself does not satisfy any goals. Once the planner has the TOP, it is then able to apply the different general strategies for repairing the faulty plan that are stored under the structure. In this case, the strategies are:
ALTER-PLAN:PRECONDITION: Find a way to paint the ceiling that does not require a ladder, thus avoiding the problem of the failed precondition.
ALTER-PLAN:SIDE-EFFECT: Find a way to paint the ladder that does not leave it wet, thus negating the blocking state.
REORDER: Paint the ceiling before painting the ladder, thus running the step with the problematic precondition before it is blocked.
RECOVER: Do something to dry the ladder, thus reestablishing the state before it is needed by a later step.
Each of these strategies is designed to break a single link in the causal chain that leads to the failure.
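One way to picture how each strategy breaks a single link is to encode the ladder example's causal chain as a small structure and each strategy as an edit to one part of it. The encoding below is hypothetical, not CHEF's representation, and the replacement step names are invented.

```python
# The ladder/ceiling failure as a causal chain (hypothetical encoding).
chain = {
    "step1": "paint-ladder",
    "side_effect": "ladder-wet",         # side effect of step1
    "step2": "paint-ceiling",
    "precondition": "ladder-available",  # blocked by the side effect
}

def alter_plan_precondition(c):  # paint the ceiling without needing a ladder
    return dict(c, step2="paint-ceiling-from-scaffold", precondition=None)

def alter_plan_side_effect(c):   # paint the ladder without leaving it wet
    return dict(c, step1="paint-ladder-with-fast-drying-paint", side_effect=None)

def reorder(c):                  # run the threatened step before it is blocked
    return dict(c, step1=c["step2"], step2=c["step1"])

def recover(c):                  # re-establish the condition, e.g. dry the ladder
    return dict(c, side_effect=None)

print(reorder(chain))
```

Each function removes exactly one link (the side effect, the dependence on the precondition, or the ordering that lets the side effect precede the threatened step), which is the sense in which the four strategies exhaust the repairs for this TOP.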
The planning TOPs discussed in this paper were developed for the CHEF program and are built out of a general vocabulary of causal interactions. (For a detailed discussion of this vocabulary, see Section 9.) The problems that they describe include many of those discussed by both Sussman [24] and Sacerdoti [16], but the level of detail of TOPs allows a richer description than was possible using the vocabulary of critics suggested by either of them. As a result, a planner that uses TOPs to diagnose and repair planning problems is able to describe problems in greater detail and thus suggest a wider variety of repairs for each problem encountered.

5.2. Controlling repair

A TOP stores those, and only those, strategies that will solve the problem corresponding to the TOP without interfering with the goals that are satisfied by the steps included in the problem. As a result, CHEF is able to control its use of repairs so as to fix existing problems without interfering with the goals satisfied by the other steps highlighted in the TOP. This is an important point that deserves an example.
In a failure encountered by CHEF while creating a strawberry souffle plan, the problem is diagnosed as a case of SIDE-EFFECT:DISABLED-CONDITION:BALANCE. This TOP is distinguished by the fact that the condition that is disabled is a balance condition and the state that disables it is a side effect. That the state that undermines the plan is a side effect is important in that it allows the strategies RECOVER and ALTER-PLAN:SIDE-EFFECT to be associated with the TOP. The first of these strategies suggests finding a step that will remove the side effect state before it interferes with a later step. The second suggests replacing the step that caused the side effect with a step that accomplishes the goal of the original action without producing the undesired effect.
Both of these strategies depend on the fact that the state in question is a side effect, that is, a state that does not satisfy any goals that the planner is trying to achieve. If the situation were different (e.g., if the liquid produced by pulping the strawberries satisfied one of the planner's goals), these two strategies would no longer apply. Adding a step that would remove the added liquid would violate the goal that it achieved. Replacing the step that causes the liquid with one that does not would likewise violate the goal. Changing the goals changes the strategies that can be applied to the problem, because some strategies will now interfere with those goals while fixing the problem.
Such differences between situations are reflected in the TOPs that are found to deal with them. If the liquid served some goal but violated a balance condition, the TOP found would be DESIRED-EFFECT:DISABLED-CONDITION:BALANCE. Unlike the TOP SIDE-EFFECT:DISABLED-CONDITION:BALANCE, this structure does not suggest the strategies ALTER-PLAN:SIDE-EFFECT and RECOVER, because they would fix the initial problem only at the expense of other goals.
Another important point about the relationship between planning TOPs and the strategies they store is the problem of applicability. Every strategy that is stored under a TOP will, if it can be applied, repair the problem that originally accessed the TOP. There is no guarantee, however, that a strategy can be applied in every situation. In the case of a problem CHEF encounters while stir frying beef and broccoli together (the liquid produced by stir-frying the beef causes the broccoli to become soggy), one strategy, ADJUNCT-PLAN, suggests finding a step that can be run concurrently with the STIR-FRY step that will absorb the liquid produced by the beef. Unfortunately, no such step exists in CHEF's knowledge of steps. So ADJUNCT-PLAN, while it would have repaired the plan, cannot be implemented in this situation, because the steps required to turn it into an actual change of plan do not exist.
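The applicability test amounts to searching the planner's library of concrete steps for one that realizes the abstract change a strategy calls for. The sketch below makes that distinction explicit; the step library and its requirement names are invented for illustration.

```python
# Sketch: a strategy is implementable only if a concrete step exists for it.
# The library contents and requirement names are hypothetical.

STEP_LIBRARY = {
    "remove-liquid-after": "drain-liquid",
    # no entry for "absorb-liquid-during": an ADJUNCT-PLAN-style change
    # would repair the plan, but no step implements it.
}

def instantiate(requirement):
    """Return a concrete step realizing the abstract change, or None."""
    return STEP_LIBRARY.get(requirement)

print(instantiate("remove-liquid-after"))   # a usable repair step
print(instantiate("absorb-liquid-during"))  # valid strategy, not implementable here
```

This separates the two claims in the text: every strategy under a TOP would repair the problem, but only those with a concrete instantiation can actually be applied.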
Each strategy describes a change that would fix a failed plan, but no strategy can be implemented if the steps that would make that change do not exist.
Each TOP describes a specific causal configuration that defines a problem. In turn, each of the strategies under a TOP describes one way to alter the configuration that will solve the problem described by the TOP. Each of the individual strategies suggests a change to one part of the overall configuration. Because the causal configurations that define these TOPs are built out of a vocabulary of interactions that is common to all of them, many share repair strategies. For example, one TOP corresponds to the situation in which the side effect of a step alters the conditions required for the running of a later step. Another corresponds to the problem of the side effect of a step itself being an undesired state. These two TOPs share the feature that a side effect of a step is interfering with the planner's goals, although in different ways. Because of this, they share the strategy ALTER-PLAN:SIDE-EFFECT, which suggests replacing the step in the plan that causes the side effect with one that achieves the same goals but does not cause the side effect. Each of these TOPs also organizes other strategies, but because some aspects of the problems that they describe are shared, they also share the strategy that suggests changes to that aspect of the problem.
The repair strategies stored under a TOP are those, and only those, that will repair the problem described by the TOP. Because each strategy alters one aspect of the problem, it may be associated with many TOPs that share that aspect. Each of CHEF's TOPs has two components: the problem features used to index to it and the strategies that it suggests for dealing with that problem. CHEF stores its TOPs in a discrimination network, indexed by the features that describe the problems to which they correspond.
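A toy version of that discrimination network can be written as nested lookups keyed by explanation features. The feature names, tree layout, and the strategy list for the DESIRED-EFFECT TOP below are all illustrative guesses, not CHEF's actual index; the SIDE-EFFECT strategy lists follow the souffle discussion above.

```python
# Sketch of TOP lookup in a discrimination network (hypothetical features).

DISCRIMINATION_NET = {
    "side-effect": {
        "blocked-precondition": "SIDE-EFFECT:BLOCKED-PRECONDITION",
        "disabled-balance": "SIDE-EFFECT:DISABLED-CONDITION:BALANCE",
    },
    "desired-effect": {
        "disabled-balance": "DESIRED-EFFECT:DISABLED-CONDITION:BALANCE",
    },
}

TOP_STRATEGIES = {
    # When the offending state is a side effect, it may be removed or avoided.
    "SIDE-EFFECT:DISABLED-CONDITION:BALANCE": ["RECOVER", "ALTER-PLAN:SIDE-EFFECT"],
    # When it satisfies a goal, those strategies are excluded (list is a guess).
    "DESIRED-EFFECT:DISABLED-CONDITION:BALANCE": ["ADJUNCT-PLAN"],
}

def find_top(cause_type, interaction):
    """Discriminate down from explanation features to a TOP."""
    return DISCRIMINATION_NET[cause_type][interaction]

top = find_top("side-effect", "disabled-balance")
print(top, TOP_STRATEGIES[top])
```

The key property the sketch preserves is that changing one feature of the explanation (side effect vs. desired effect) reaches a different TOP and therefore a different, goal-safe set of strategies.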
In searching for a TOP, CHEF extracts these same features from its explanation of the current problem, and the TOP that is found suggests the strategies it stores to solve that problem. CHEF uses sixteen TOPs to organize its repairs. The details of each are discussed in Section 7. For now, it is only important to understand that each TOP is defined and indexed by the features of a particular planning problem and organizes the two to six strategies that can be applied to the problem it corresponds to. While there are clearly more TOPs that can be used in plan repair than CHEF knows about, those it has describe the planning problems that it is forced to deal with. An expanded domain, with a different set of problems and possible reactions to them, would require an equally expanded set of TOPs and strategies.

6. CHEF's Repair Algorithm

As we mentioned in Section 2, CHEF's process of failure repair has five basic phases:
(1) Notice the failure.
(2) Build a causal explanation of why it happened.
(3) Use the explanation to find a collection of repair strategies.
(4) Instantiate each of the repair strategies, using the specific steps and states in the problem.
(5) Choose and implement the best of the instantiated repairs.
In the sections that follow, we will discuss each of these phases as well as clarify each one with an example from the running of CHEF.

6.1. Noticing the failure

The first step in dealing with a failure is noticing it. CHEF is only able to recognize what we call domain failures. These are failures that can be characterized in terms of the operators and states in a domain. It is unable to recognize failures such as those described by Wilensky's metagoals [27] (e.g., avoid wasting resources and avoid overly complex plans). This, however, is not because of any problem in terms of the theory of repair expressed in CHEF; it is just a byproduct of the emphasis of the program.
CHEF is able to recognize three basic types of failures:
(1) failures of a plan to be completed because of a precondition failure on some step;
(2) failures of a plan to satisfy one of the goals that it was designed to achieve;
(3) failures of a plan because of objectionable results in the outcome.
In terms of its domain, CHEF can notice that a plan has stopped because the shrimp cannot be shelled when it is too slippery, that a plan to make a souffle has failed because the batter has not risen, and that a stir fried dish with fish has failed because the fish has developed an iodine taste.
After CHEF has built a plan, it runs a simulation of it. This simulation is the program's equivalent of the real world rather than a part of the planning model, and a plan that makes it to simulation is considered complete. The result of this simulation is a table of descriptions that characterize the states of the ingredients used in the plan. Any compound objects created out of those ingredients are also described. The representation used by both CHEF and its simulator is a frame-like notation. In a current reimplementation of CHEF, we are doing the same work using a typed predicate calculus notation.
After running a plan to make a beef and broccoli dish, the table of results includes descriptions of the taste, texture and size of the different ingredients as well as the tastes included in the dish as a whole (Fig. 1). Once a simulation is over, CHEF checks the states on this table against the goals that it believes should be satisfied by the plan that it has just run. These goals take the form of state descriptions of the ingredients, the overall dish and the compound items that are built along the way. No matter what its object, each goal defines a particular TASTE or TEXTURE for that object. Goals have the same form as the states placed on the simulator's result table, allowing CHEF to test for their presence after a simulation.
CHEF tests for the satisfaction of goals by comparing expected states against those on the table of results. Most of the failures that CHEF is able to recognize are related to the final result of the plan rather than the steps the plan goes through on the way to that result. This is because CHEF notices failures by looking at the final product rather than by monitoring the plan as it is being executed. The only failures CHEF recognizes that are related to the running of the plan rather than its effects are those that result from the inability to perform a step because the conditions required for that performance are not satisfied.

(SIZE OBJECT (BEEF2) SIZE (CHUNK))
(SIZE OBJECT (BROCCOLI1) SIZE (CHUNK))
(TEXTURE OBJECT (BEEF2) TEXTURE (TENDER))
(TEXTURE OBJECT (BROCCOLI1) TEXTURE (SOGGY))
(TASTE OBJECT (BEEF2) TASTE (SAVORY INTENSITY (9.)))
(TASTE OBJECT (BROCCOLI1) TASTE (SAVORY INTENSITY (5.)))
(TASTE OBJECT (GARLIC1) TASTE (GARLICY INTENSITY (9.)))
(TASTE OBJECT (DISH) TASTE (AND (SALTY INTENSITY (9.))
                                (GARLICY INTENSITY (9.))
                                (SAVORY INTENSITY (9.))
                                (SAVORY INTENSITY (5.))))

Fig. 1. Section of result table for BEEF-WITH-BROCCOLI.

The easiest failure for CHEF to notice is the failure of a plan to finish because of a blocked precondition on a particular step. When this occurs, the simulator stops execution of the plan and informs CHEF that the plan has failed because of a blocked precondition. This is what happens in a plan for SHRIMP-STIR-FRY that CHEF created out of its memory of a FISH-STIR-FRY plan (Fig. 2). The problem with this dish is simply that a MARINATE step has been placed before a SHELL step, making the shrimp too slippery to handle. Such failures, in which a plan cannot run to completion because the preconditions for a step are blocked, are the exception; more often than not, CHEF's plans run to completion.
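The way a simulator can halt a plan on a blocked precondition can be sketched with the MARINATE-before-SHELL bug from the SHRIMP-STIR-FRY example. The STRIPS-style step encoding (precondition, add, and delete sets) is a standard simplification of our own, not CHEF's actual representation.

```python
# Sketch of a simulator halting on a blocked precondition.
def simulate(steps, state):
    for step in steps:
        missing = step["preconds"] - state
        if missing:
            # Stop execution and report the blocked precondition to the planner.
            return ("FAILED", step["name"], missing)
        state = (state - step["deletes"]) | step["adds"]
    return ("COMPLETE", None, set())

# Marinating first makes the shrimp slippery, so they are no longer
# handleable when the SHELL step requires it.
shrimp_plan = [
    {"name": "MARINATE", "preconds": set(),
     "adds": {"marinated", "slippery"}, "deletes": {"handleable"}},
    {"name": "SHELL", "preconds": {"handleable"},
     "adds": {"shelled"}, "deletes": set()},
]

status, step, missing = simulate(shrimp_plan, {"handleable"})
```

Running the faulty plan yields a FAILED status on the SHELL step with "handleable" as the missing condition; swapping the two steps lets the plan complete.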
Even though a plan may run, this is no guarantee that it will run well or that it will succeed in doing all that it is supposed to do. After a plan finishes, CHEF must check its outcome to be sure that it has not failed to achieve the desired results.

CHEF evaluates its results along two dimensions. First, it has to check for any of the plan's goals that might not have been achieved. Second, it has to check for any generally objectionable states that might have resulted from the plan.

Each of CHEF's plans has a set of goals associated with it. These goals are generated by CHEF using the general role information attached to the general plan type and the specific ingredients of the plan itself. Each role specifies the expected contribution of its fillers, and each ingredient provides the particulars of that contribution. As described above, these goals have the same form as the states placed on the simulator's result table, allowing CHEF to test for their presence after a simulation. Once a plan has been run, the goals associated with the plan are searched for on the table of results built up by the simulator. If a goal is not present, then the plan has not succeeded in achieving it.

Simulating -> Marinate the shrimp in the soy sauce, sesame oil, egg white, sugar, corn starch and rice wine.
Unable to simulate -> Shell the shrimp. : failed precondition.
RULE: "A thing has to be handleable in order to shell it."
Some precondition of step: Shell the shrimp.
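The two outcome checks can be sketched under the same illustrative encoding: goals whose expected state is absent from the result table, and generally objectionable states that appear on it. The expected CRISP texture for the broccoli and the set of objectionable states are our assumptions for the example; the states themselves follow the beef-and-broccoli failure.

```python
# Sketch of CHEF's two-dimensional outcome check (illustrative encoding).
result_table = {
    ("TEXTURE", "BEEF2"): "TENDER",
    ("TEXTURE", "BROCCOLI1"): "SOGGY",
    ("TASTE", "DISH"): "SAVORY",
}

goals = {
    ("TEXTURE", "BEEF2"): "TENDER",
    ("TEXTURE", "BROCCOLI1"): "CRISP",  # the plan was meant to keep it crisp
}

# States that are objectionable regardless of the plan's stated goals.
OBJECTIONABLE = {"SOGGY", "IODINE-TASTE"}

def evaluate(table, goals):
    # Dimension 1: goals whose expected state is absent from the table.
    missed = [g for g, v in goals.items() if table.get(g) != v]
    # Dimension 2: objectionable states that showed up anyway.
    bad = [(k, v) for k, v in table.items() if v in OBJECTIONABLE]
    return missed, bad

missed, bad = evaluate(result_table, goals)
```

Here the same soggy broccoli is flagged twice: once as a missed texture goal and once as a generally objectionable state, which is exactly the kind of description the later explanation phase starts from.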


Publication date: 1987